Distributed Asynchronous Dual-Free Stochastic Dual Coordinate Ascent

Authors

  • Zhouyuan Huo
  • Heng Huang
Abstract

In this paper, we propose a new Distributed Asynchronous Dual-Free Stochastic Dual Coordinate Ascent method (dis-dfSDCA), and prove that it achieves a linear convergence rate in the convex case. Stochastic Dual Coordinate Ascent (SDCA) is a popular method for solving regularized convex loss minimization problems. The Dual-Free Stochastic Dual Coordinate Ascent (dfSDCA) method is a variation of SDCA that can be applied to more general problems whose dual formulation is meaningless. We extend the dfSDCA method to a distributed asynchronous setting and provide a theoretical analysis. We perform large-scale experiments on a distributed system, and the experimental results validate our findings.
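To make the underlying update concrete, below is a minimal single-machine sketch of the dual-free SDCA iteration that dis-dfSDCA builds on, following the standard dual-free formulation (a pseudo-dual vector per example, with the primal iterate kept proportional to their sum). The quadratic loss, the step size, and all variable names are illustrative assumptions, not the paper's code; the paper's contribution is running such updates asynchronously across distributed workers.

```python
import numpy as np

def df_sdca(X, y, lam=0.1, eta=0.01, epochs=20, seed=0):
    """Sketch of dual-free SDCA for the smooth squared loss
    phi_i(w) = 0.5 * (x_i . w - y_i)^2 with (lam/2) * ||w||^2 regularization.
    dis-dfSDCA would run these per-example updates concurrently on workers."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros((n, d))          # one pseudo-dual vector per example
    w = alpha.sum(axis=0) / (lam * n)  # invariant: w = (1/(lam*n)) * sum_i alpha_i
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i]    # gradient of phi_i at current w
            delta = grad + alpha[i]
            alpha[i] -= eta * lam * n * delta  # pseudo-dual update
            w -= eta * delta                   # preserves the invariant above
    return w

# Toy usage: recover a linear model from synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    print(df_sdca(X, y))
```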


Related papers

Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent

We present and study a distributed optimization algorithm that employs a stochastic dual coordinate ascent method. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often perform better than stochastic gradient descent methods in optimizing regularized loss minimization problems. However, there has been little effort to study them in a distributed framework. We ...


Hybrid-DCA: A Double Asynchronous Approach for Stochastic Dual Coordinate Ascent

In prior work, stochastic dual coordinate ascent (SDCA) has been parallelized either in a multi-core environment, where the cores communicate through shared memory, or in a multi-processor distributed-memory environment, where the processors communicate through message passing. In this paper, we propose a hybrid SDCA framework for multi-core clusters, the most common high performance computing environm...


Network Constrained Distributed Dual Coordinate Ascent for Machine Learning

With the explosion of data size and the limited storage space at a single location, data are often distributed across different locations. We thus face the challenge of performing large-scale machine learning on these distributed data through communication networks. In this paper, we study how network communication constraints impact the convergence speed of distributed machine learning optimizat...


Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization

Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dua...


Accelerated Mini-Batch Stochastic Dual Coordinate Ascent

Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementati...



Publication date: 2016